
Image and Video Editing with StyleGAN3



The faces of the people in the videos above are edited with StyleGAN3.

Abstract

StyleGAN is arguably one of the most intriguing and well-studied generative models, demonstrating impressive performance in image generation, inversion, and manipulation. In this work, we explore the recent StyleGAN3 architecture, compare it to its predecessor, and investigate its unique advantages, as well as its drawbacks. In particular, we demonstrate that while StyleGAN3 can be trained on unaligned data, one can still use aligned data for training, without hindering the ability to generate unaligned imagery. Next, our analysis of the disentanglement of the different latent spaces of StyleGAN3 indicates that the commonly used W/W+ spaces are more entangled than their StyleGAN2 counterparts, underscoring the benefits of using the StyleSpace for fine-grained editing. Considering image inversion, we observe that existing encoder-based techniques struggle when trained on unaligned data. We therefore propose an encoding scheme that is trained solely on aligned data, yet can still invert unaligned images. Finally, we introduce a novel video inversion and editing workflow that leverages the capabilities of a fine-tuned StyleGAN3 generator to reduce texture sticking and expand the field of view of the edited video.

Image Editing

In the paper, we examine the effectiveness of various techniques for image editing with StyleGAN3. We show that methods that worked well with previous Style-based generators remain compatible with StyleGAN3 for editing both synthetic and real images. However, since the latent spaces of StyleGAN3 are more entangled than those of its predecessors, using the StyleSpace for latent-based editing becomes especially important with the newer generator. By applying translations and rotations to the Fourier features, we are able to edit both aligned and unaligned images, even when the generator itself was trained solely on aligned data, as sketched below.
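To make this concrete, below is a minimal sketch of how a translation and rotation can be applied through the Fourier-feature input layer, using the user-controlled transform buffer (G.synthesis.input.transform) exposed by the official StyleGAN3 implementation. The checkpoint filename is a placeholder, and the snippet assumes the official stylegan3 repository (its dnnlib/torch_utils modules) is importable so the pickle can be loaded:

```python
import pickle
import numpy as np
import torch

def make_transform(translate, angle_deg):
    """Build a 3x3 affine matrix (rotation + translation) for the
    Fourier-feature input layer of StyleGAN3."""
    m = np.eye(3)
    s, c = np.sin(np.deg2rad(angle_deg)), np.cos(np.deg2rad(angle_deg))
    m[0, 0], m[0, 1], m[0, 2] = c, s, translate[0]
    m[1, 0], m[1, 1], m[1, 2] = -s, c, translate[1]
    return m

# Placeholder checkpoint path; any StyleGAN3 pickle behaves the same way.
with open('stylegan3-r-ffhq-1024x1024.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].cuda().eval()

# Shift a quarter frame to the right and rotate by 10 degrees. The generator
# applies the inverse of the user transform to its input grid, as in the
# official gen_images.py script.
m = np.linalg.inv(make_transform(translate=(0.25, 0.0), angle_deg=10.0))
G.synthesis.input.transform.copy_(torch.from_numpy(m))

z = torch.randn(1, G.z_dim).cuda()
w = G.mapping(z, c=None, truncation_psi=0.7)  # unconditional model, so c=None
img = G.synthesis(w)  # an edited latent (e.g., w + alpha * direction) works too
```

Because the transform acts on the input grid rather than the weights, the same generator produces aligned or unaligned imagery on demand, which is what the editing results below rely on.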

In the videos above, we demonstrate edits on real images obtained using InterFaceGAN and StyleCLIP, alongside the original inputs shown on the left. In the first row, we show edits of smile, black hair, red lipstick, and age, while in the second row we show edits of smile, blonde hair, gender, and age.

Inverting and Editing Videos

The equivariance property of StyleGAN3 makes it potentially more suitable for encoding and editing videos. Beyond its ability to generate high-quality unaligned images, it has been shown that, unlike previous generators, StyleGAN3 does not suffer from the texture-sticking phenomenon. In this paper, we combine our encoder with PTI to attain faithful and consistent reconstructions of a full video. Finally, we employ latent-based editing techniques to edit the video and smooth the resulting latent codes to improve temporal consistency.
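The sketch below outlines this per-frame workflow under stated assumptions: `encoder` stands in for our inversion encoder, `run_pti` is a hypothetical wrapper around PTI generator tuning, and `direction` is a latent edit direction (e.g., from InterFaceGAN); only the smoothing step is spelled out in full:

```python
import torch

@torch.no_grad()
def invert_video(encoder, frames):
    """Invert each aligned frame into W+ with a pretrained encoder.
    `frames` is a (T, 3, H, W) tensor of pre-processed video frames."""
    return torch.stack([encoder(f.unsqueeze(0)).squeeze(0) for f in frames])

def smooth_latents(ws, window=5):
    """Temporally smooth per-frame latent codes (T, num_ws, 512) with a
    simple moving average to reduce jitter in the edited video."""
    smoothed = torch.empty_like(ws)
    for t in range(ws.shape[0]):
        lo = max(0, t - window // 2)
        hi = min(ws.shape[0], t + window // 2 + 1)
        smoothed[t] = ws[lo:hi].mean(dim=0)
    return smoothed

# Putting the pieces together (run_pti is a hypothetical PTI wrapper):
# ws = invert_video(encoder, frames)                 # per-frame W+ codes
# G_tuned = run_pti(G, ws, frames)                   # PTI generator tuning
# ws_edit = smooth_latents(ws + alpha * direction)   # edit, then smooth
# frames_out = [G_tuned.synthesis(w.unsqueeze(0)) for w in ws_edit]
```

Below are some examples of our results, where we show the original, reconstructed, and edited videos side by side: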

We can also fine-tune the generator with StyleGAN-NADA and edit videos in various styles, such as a Pixar cartoon or a sketch.
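Since StyleGAN-NADA fine-tunes only the generator's weights and leaves the latent space untouched, the per-frame latents inverted above can, in principle, be re-rendered through a stylized generator without re-inversion. In the sketch below, `G_nada` is a placeholder for such a fine-tuned checkpoint, and `ws_edit` comes from the sketch above:

```python
# Re-render the smoothed, edited latents through a StyleGAN-NADA
# fine-tuned generator (e.g., one trained toward a "Pixar" text prompt).
with torch.no_grad():
    frames_pixar = [G_nada.synthesis(w.unsqueeze(0)) for w in ws_edit]
```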

Acknowledgements
